Usability testing of a multimedia e-learning resource for electrolyte and acid-base disorders

Authors

  • Mogamat Razeen Davids
  • Usuf M. E. Chikte
  • Karen Grimmer-Somers
  • Mitchell L. Halperin
British Journal of Educational Technology 45 (2014), doi:10.1111/bjet.12042

Abstract

The usability of computer interfaces may have a major influence on learning. Design approaches that optimize usability are commonplace in the software development industry but are seldom used in the development of e-learning resources, especially in medical education. We conducted a usability evaluation of a multimedia resource for teaching electrolyte and acid-base disorders by studying the interaction of 15 medical doctors with the application. Most of the usability problems occurred in an interactive treatment simulation, which was completed successfully by only 20% of participants. A total of 27 distinct usability problems were detected, with 15 categorized as serious. No differences were observed with respect to usability problems detected by junior doctors as compared with more experienced colleagues. Problems were related to user information and feedback, the visual layout, match with the real world, error prevention and management, and consistency and standards. The resource was therefore unusable for many participants; this is in contrast to the good scores previously reported for subjective user satisfaction. The findings suggest that the development of e-learning materials should follow an iterative design-and-test process that includes routine usability evaluation. User testing should include the study of objective measures and not rely only on self-reported measures of satisfaction.

Introduction

e-Learning is considered to be as effective as educational interventions delivered by traditional media (Chumley-Jones, Dobbie & Alford, 2002; Cook et al, 2008) and has rapidly become part of the medical education mainstream (Ellaway & Masters, 2008). Creative educators are increasingly using animation, simulations and virtual 3-D learning environments (Hansen, 2008) to create engaging learning resources for students and health-care professionals. Virtual patients, for instance, hold particular promise for assisting in the development of clinical reasoning ability (Cook & Triola, 2009).

Developing innovative e-learning materials can be expensive and time-consuming. A survey of virtual patient development at US and Canadian medical schools revealed that the cases took an average of 16.6 months to complete and that 85% of them cost over $10 000 (Huang, Reynolds & Candler, 2007). It is therefore important to maximize the educational impact of these resources. One aspect that has not been sufficiently emphasized in the implementation of effective e-learning is the usability of the technology interface. Usability has a major impact on learning and should be considered when designing e-learning resources (Sandars, 2010; Zaharias, 2009).

Usability is a concept from the field of human–computer interaction that describes the ease with which a technology interface can be used. The International Standard ISO 9241-11 defines it as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (Abran, Khelifi, Suryn & Seffah, 2003). A user interface should be so intuitive and self-evident that even inexperienced users can accomplish tasks successfully (Krug, 2006). High usability of learning resources is essential, though of course not sufficient, for achieving the desired educational impact (Sandars & Lafferty, 2010).
This is likely to be especially relevant when the subject matter is complex and contains multiple interacting elements (Sweller, 2010). Such material presents a heavy intrinsic cognitive load in view of the limited capacity of working memory and is often perceived as difficult to learn. Poorly designed user interfaces add an extraneous cognitive load, as the user has to struggle with the technology interface as well as with challenging content. Reducing extraneous cognitive load has been shown to lead to large gains in learning efficiency (Mayer & Moreno, 2003; van Merrienboer & Sweller, 2010); optimizing the usability of e-learning resources therefore seems essential.

Iterative methodologies that include the routine evaluation of usability are common in the software development industry (Bygstad, Ghinea & Brevik, 2008; Holzinger, Errath, Searle, Thurnher & Slany, 2005; Mao, Vredenburg, Smith & Carey, 2005; Sohaib & Khan, 2010). As far back as the mid-80s, Gould and Lewis (1985) recommended the following design principles: an early focus on users and their tasks; empirical user testing starting early in the development process; and an iterative approach using cycles of design, testing and redesign until the application meets performance and usability goals. This approach is seldom used in the development and evaluation of e-learning resources, especially in medical education (Sandars, 2010).

Practitioner Notes

What is already known about this topic
• The usability of computer interfaces may have a major influence on learning.
• While design approaches that optimize usability are common in the software development industry, this is not the case with e-learning, especially in the area of medical education.

What this paper adds
• Neglecting the evaluation of usability may lead to the implementation of e-learning materials with poor usability, with failure to achieve desired educational outcomes.
• The results of objective user testing do not correlate well with evaluations based on self-reported user satisfaction.

Implications for practice and/or policy
• e-Learning development should include routine usability evaluation and follow an iterative design-test-redesign approach.
• Usability evaluation should include observing typical end-users interacting with the system and not be based only on subjective ratings of user satisfaction.

There are two main categories of usability evaluation techniques: empirical user testing involves studying typical end-users interacting with the application, while usability inspection methods involve experts evaluating the application against a set of rules or design principles (Dumas & Salzman, 2006). Selecting which methods and measures to use when evaluating an e-learning resource remains difficult. For example, we can evaluate usability, learner interactions, learner perceptions or learning outcomes; we can collect subjective or objective data, qualitative or quantitative data; and we can make use of experts or involve typical end-users (Dyson & Campello, 2003). Even if the focus is on usability, as in this study, different approaches are available, and each has its own resource requirements, examines a particular aspect of usability and detects different usability problems. A common recommendation is to combine methods whenever resources allow and to alternate between inspection by experts and end-user testing.

User testing usually involves participants being asked to think aloud as they interact with the system being tested. Evaluations may be conducted in settings ranging from sophisticated usability laboratories to informal settings employing paper prototypes (Snyder, 2003). User testing has been rated by usability professionals as having a greater impact on product development than inspection methods, although the latter are also very commonly used (Mao et al, 2005; Rosenbaum, Rohn & Humburg, 2000). Developers are less likely to question the validity of the results when usability problems are identified by real users rather than by experts (Dumas & Salzman, 2006). However, real users may be expensive and difficult to recruit, and the recording, coding and analysis of testing sessions may also be expensive and time-consuming. Nielsen has popularized simpler methods, pointing out that any testing is better than not testing at all, and demonstrating that four to five users are sufficient for each cycle of testing (Nielsen, 2012). This "discount usability" approach (Nielsen, 2009) may be an efficient option for improving the process of developing e-learning materials.
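The adequacy of such small samples follows from the cumulative problem-discovery model of Nielsen and Landauer, in which the expected proportion of problems found by n users is 1 − (1 − λ)^n, where λ is the average probability that a single user encounters any given problem (about 0.31 across the projects Nielsen studied). A minimal sketch, with λ = 0.31 assumed purely for illustration:

```python
# Sketch of the Nielsen & Landauer problem-discovery curve; lambda = 0.31
# is the average per-user detection probability they reported, assumed
# here purely for illustration.

def proportion_found(n_users: int, detection_rate: float = 0.31) -> float:
    """Expected fraction of distinct usability problems found by n_users."""
    return 1.0 - (1.0 - detection_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users: {proportion_found(n):.0%}")
# 1 user: 31%, 5 users: ~84%, 15 users: ~100% -- hence the advice to run
# several small testing cycles rather than one large test.
```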
Inspection methods are often less expensive because they involve fewer people and can detect many problems in a limited amount of time. Evaluators may also suggest solutions to the problems they find. The most commonly used technique is heuristic evaluation, in which expert evaluators find usability problems by examining an interface and judging its compliance with well-established usability principles, called heuristics. The process is influenced by the skills of the evaluators, the ideal evaluators being "double experts" in usability and in the domain of the application being evaluated (Nielsen, 1992). However, such individuals may be difficult to find or very expensive to employ. Evaluators may also have their own biases regarding interface design or may have insufficient domain knowledge, causing domain-specific problems to be missed. They may miss problems that affect real users or identify many low-priority problems that hardly affect real users.

The raw data generated by an evaluation need to be transformed before they can be used to improve the user interface (Howarth, Andre & Hartson, 2007). Each occurrence of a usability problem encountered by a user or evaluator is a problem instance. All related instances must be recognized and consolidated into distinct problems, and the problems may then be categorized according to the interface elements involved, the severity of the problems or the design principles violated. See Table 1 for a set of widely used principles for guiding good interface design. Categorizing the problems in this way makes it easier to identify solutions to address them and also to prioritize them for fixing during the subsequent redesign process; a minimal sketch of this consolidation step is shown after Table 1.

We have developed a web-based learning resource to help students and practicing clinicians acquire expertise in the complex area of electrolyte, water and acid-base disorders, an area of medicine that students and clinicians find particularly difficult to master (Dawson-Saunders, Feltovich, Coulson & Steward, 1990). Patients with these disorders are usually encountered by doctors working in the fields of internal medicine or pediatrics, or in subdisciplines of these fields such as nephrology, endocrinology and intensive care medicine. Our Electrolyte Workshop provides instruction and the opportunity to practice the treatment of electrolyte disorders through an interactive simulation. The application is freely accessible at http://www.learnphysiology.org/sim1/. The underlying teaching approach and the initial development of the Electrolyte Workshop have been described previously (Davids, Chikte & Halperin, 2011). The application was built in Flash® and involved several iterations of development and review by the authors and the development team. This informal review process by content experts and experienced developers detected and corrected many usability problems. Self-reported end-user satisfaction with the completed application was good, as judged by positive comments and high ratings on the System Usability Scale (Brooke, 1996).

This paper reports on an evaluation that focuses on objective measures of usability, obtained by observing, recording and analyzing the interaction of end-users with the application. The study did not address educational outcomes. Testing was conducted with doctors working in the field of internal medicine, our main target audience. The purpose was to determine how well our Electrolyte Workshop conforms to principles of good interface design and to inform further development. The study illustrates the importance of user testing in evaluating e-learning materials and, in particular, demonstrates the need to observe users and examine objective data rather than rely solely on more easily obtained questionnaire data.

Table 1: Principles of good interface design (heuristics). The first 10 are those proposed by Nielsen (2005), and the last is from Karat et al (1992)

1. Visibility of system status; feedback: Keep users informed through timely, appropriate feedback. Users always know where they are, which actions can be taken and how they can be performed.
2. Match with the real world (language, conventions): Speak the users' language, use familiar terms and concepts; follow real-world conventions.
3. Consistency and conformity to standards: Words, situations and actions mean the same thing; the application uses commonly accepted conventions and conforms to user expectations.
4. Minimize memory load; recognition rather than recall: Objects, actions and options are accessed easily. The user should not have to remember information from one part of the application to another.
5. Aesthetic and minimalist design: No irrelevant information, as it competes with relevant information and diminishes its relative visibility. Animation and transitions should be used sparingly.
6. Help and documentation: It is better if the system can be used without documentation. If required, documentation should be concise, easy to search and task-centered.
7. User control and freedom: The user can control the direction and pace of the application. Clearly marked exits are provided if users take wrong options by mistake. Support undo and redo.
8. Flexibility and efficiency of use: Users can modify the application to suit their individual capabilities and needs, for example, by using shortcuts.
9. Error prevention and tolerance: Careful design to prevent errors from occurring. Despite user errors, the intended result may still be achieved through error correction or good error management.
10. Help users recognize, diagnose and recover from errors: Error messages should be in plain language (no codes or jargon) and suggest a solution.
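To make the consolidation step concrete, the sketch below shows one possible representation of problem instances and distinct problems; the class and field names are illustrative inventions, not the structure used by the study's tooling or by any particular logging product.

```python
# Illustrative sketch of consolidating problem instances into distinct
# problems; all names here are hypothetical, not from the study's tooling.
from dataclasses import dataclass, field

@dataclass
class ProblemInstance:
    participant: str   # e.g. "P07"
    task: str          # e.g. "HandsOn treatment simulation"
    note: str          # observer's description of what happened

@dataclass
class DistinctProblem:
    element: str              # interface element involved
    heuristics: list          # Table 1 principles judged to be violated
    serious: bool             # judged on impact, frequency and persistence
    instances: list = field(default_factory=list)

    @property
    def frequency(self) -> int:
        # Instance count per distinct problem, as tallied in Table 2
        return len(self.instances)

# Analysts map every logged instance onto one distinct problem, so counts
# of instances and of distinct problems fall out of the same structure.
panel = DistinctProblem(element="Lead-in slides: lab data tab",
                        heuristics=["Intuitive visual layout"],
                        serious=True)
panel.instances.append(ProblemInstance("P03", "HandsOn lead-in slides",
                                       "never opened the sliding panel"))
print(panel.frequency)  # -> 1
```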
11. Intuitive visual layout: Position elements on screen so that they are easily perceived, understandable and visually attractive.

Methods

Ethics approval for the study was granted by the Committee for Human Research at the Faculty of Health Sciences of Stellenbosch University (project no. N08/05/158).

The e-learning resource
The Electrolyte Workshop is built in Adobe® Flash® and consists of case-based tutorials. There are two sections: cases in the WalkThru section present a clinical problem and then demonstrate how an expert would analyze the data and make decisions about treatment. Animation is used to illustrate changes in body fluid compartment sizes, brain cell size and plasma sodium concentrations. The concept is "look and learn," analogous to the use of worked-out examples in other disciplines (Renkl, 2005), which allows students to appreciate the underlying principles rather than being focused on finding solutions to the problem presented. Cases in the second section, called the HandsOn section, are interactive and include a treatment simulation where users can select from a menu of therapies and receive immediate feedback via animations and text messages. The HandsOn cases have introductory ("lead-in") slides that set the scene for the treatment simulation. These slides contain important clinical and laboratory data that are needed to complete the treatment simulation. After successful completion of the simulation, a summary slide is displayed containing several "take-home messages." Currently the application contains only two cases, one in each section. The WalkThru case is that of a young girl with acute hyponatremia related to Ecstasy use, and the HandsOn case is that of chronic hyponatremia in a patient with Addison's disease.

Participants
User testing was conducted with 15 doctors at an academic department of medicine. The group included 10 doctors who were undertaking postgraduate training in internal medicine ("registrars") and 5 qualified specialists in internal medicine, nephrology and endocrinology. This group is typical of our target population. We considered that the specialists and registrars were likely to differ in terms of subject knowledge and experience, and therefore recruited 15 participants to allow us to include sufficient participants from both groups and also to improve the overall usability problem detection rate (Faulkner, 2003).

User testing equipment and procedures
The application was loaded onto two 15-inch laptop computers, each equipped with a mouse and a webcam with an integrated microphone. To facilitate the capture and analysis of information from each testing session, we installed a usability software tool on each computer. We selected Morae® (http://www.techsmith.com) for this purpose because it is widely used and suited our requirements in terms of data collection and analysis options, cost and ease of use. Running unobtrusively in the background, it records all user interactions with a website or computer application. This includes the user's voice, webcam video of facial expressions and video of all on-screen activity. It also captures data such as mouse clicks and keyboard activity. Recordings are marked up to log the start and end of tasks, instances of usability problems, user comments and occasions when help was needed. Metrics such as time on task, task completion rates, usability problem counts and mouse activity are readily generated.
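As an illustration of how such metrics fall out of time-stamped event logs, here is a minimal sketch; the event format is an assumption made for this example (Morae's own recording format is proprietary and not shown):

```python
# Minimal sketch: deriving time on task and click counts from a
# time-stamped event log. The Event format is a hypothetical stand-in
# for the data a tool such as Morae records.
from dataclasses import dataclass

@dataclass
class Event:
    t: float     # seconds since the session started
    kind: str    # "task_start", "task_end" or "click"
    task: str    # task the event belongs to

def time_on_task(log: list, task: str) -> float:
    starts = [e.t for e in log if e.task == task and e.kind == "task_start"]
    ends = [e.t for e in log if e.task == task and e.kind == "task_end"]
    return (min(ends) - min(starts)) if starts and ends else float("nan")

def click_count(log: list, task: str) -> int:
    return sum(1 for e in log if e.task == task and e.kind == "click")

log = [Event(0.0, "task_start", "WalkThru"),
       Event(12.5, "click", "WalkThru"),
       Event(14.0, "click", "WalkThru"),
       Event(310.0, "task_end", "WalkThru")]
print(time_on_task(log, "WalkThru"), click_count(log, "WalkThru"))  # 310.0 2
```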
Participants received written instructions. They were required to work through the WalkThru and HandsOn cases and to look carefully at the different panels on each slide. They were encouraged to try different options in the treatment simulation and were also asked to look at the glossary. No time limits were set.

Measures of usability
For the purposes of evaluating usability, the WalkThru case, the introductory slides of the HandsOn case, the treatment simulation of the HandsOn case and the glossary were each regarded as a separate task. Binary task completion rates and the detection of usability problems were recorded for each task as measures of effectiveness. Time on task and input device activity (mouse clicks and mouse movement) were recorded for each task as measures of efficiency. Successful task completion in the WalkThru case and the introductory slides of the HandsOn case simply required that participants navigate through that section from beginning to end, viewing all the information available. For completion of the interactive treatment simulation in the HandsOn case, participants had to treat their patient effectively by applying appropriate therapy at the correct dosages, and then exit the simulation to end with a summary "take-home messages" slide. In the case of the glossary, participants were simply required to open it by clicking a text hyperlink on a slide or by using its navigation tab at the top of the screen.

The usability problems detected by participants as they worked through the tasks were categorized by severity, the interface element involved and the design principle (heuristic) violated. Our definition of a serious usability problem is based on that of Nielsen (1997), which takes into account the impact, frequency and persistence of the problem; it refers to a problem that may cause unacceptable delays or even task failure for the user and which needs to be fixed before an application is released. Table 1 lists the heuristics we considered when analyzing the usability problems detected. They are based on those proposed by Nielsen (2005) and as used by Karat, Campbell and Fiegel (1992). Each problem identified was mapped to one or more heuristics.

Statistical tests
Binary task completion rates are reported as proportions, usability problems as counts, and time on task (in minutes) and mouse activity (clicks and movement in pixels) as means ± SD. For the comparisons between specialists and registrars, and between those participants who completed a task successfully and those who did not, Fisher's exact test was used to compare proportions, and the Wilcoxon rank-sum test was used to compare usability problem detection, time on task and mouse activity. The significance level was set at .05.
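For illustration, these two tests can be run as follows with SciPy; every number below is an invented placeholder, not data from the study:

```python
# Illustrative sketch of the statistical comparisons described above,
# using scipy.stats; all counts and times below are invented placeholders.
from scipy.stats import fisher_exact, ranksums

# Fisher's exact test on a 2x2 table of task completion:
# rows = completed / not completed, columns = specialists / registrars.
completion_table = [[2, 6],   # completed the task
                    [3, 4]]   # did not complete it
_, p_completion = fisher_exact(completion_table)

# Wilcoxon rank-sum test on time on task (minutes).
specialists = [8.2, 9.5, 11.0, 7.8, 10.1]
registrars = [9.0, 12.3, 10.7, 13.5, 8.8, 11.9, 10.2, 9.6, 12.0, 11.1]
_, p_time = ranksums(specialists, registrars)

print(f"completion: p = {p_completion:.3f}")   # compare against alpha = .05
print(f"time on task: p = {p_time:.3f}")
```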
Results

User testing focused on measures of effectiveness and efficiency and yielded data that are described below and in Table 2. Although not the focus of this study, we also compared specialists with registrars, and participants who completed a task successfully with those who did not; these results are summarized at the end of this section.

Table 2: Measures of effectiveness: successful task completion rates and counts of usability problems detected by participants. Where the same problem was encountered by multiple participants, these instances were merged to provide a count of unique or distinct problems

  Task                                    Completion rate   All problems:          Serious problems:
                                                            instances / distinct   instances / distinct
  Task 1: WalkThru case                   15/15 (100%)      4 / 4                  1 / 1
  Task 2: HandsOn lead-in slides          8/15 (53%)        16 / 5                 10 / 2
  Task 3: HandsOn treatment simulation    3/15 (20%)        44 / 18                34 / 12
  Total                                                     64 / 27                43 / 15

Measures of effectiveness: task completion rates and usability problem detection (Table 2)

Task completion rates
Participants all completed the WalkThru case with ease. The lead-in section of the HandsOn case was completed successfully by eight participants (53% task completion rate), while the treatment simulation was completed successfully by only three participants (20%). The glossary was viewed by nine participants, none of whom experienced any usability problems while accessing this feature of the application. All of them opened the glossary by clicking its main navigation tab at the top of the screen and not via a text hyperlink on one of the slides.

Usability problem detection
A total of 27 distinct usability problems were identified, 15 of which were categorized as serious. A median of 4 problems was detected per participant; for the serious problems, the median detection rate was 3 per participant. Table 3 contains a sample of the serious usability problems detected and lists the interface elements involved and the heuristics violated, as well as proposed solutions for addressing these problems.

In the WalkThru case, four distinct usability problems were detected: these related to user information and feedback (two problems), user control and freedom (unclear navigation, one problem) and match with the real world (a problem with case accuracy, one problem). Only the last of these was categorized as serious, violating the heuristic of matching with the real world: an animation showed fluid moving out of the intracellular fluid compartment and then simply disappearing rather than appearing in the extracellular fluid compartment (see the first line of Table 3 for details and Multimedia Appendix S1 for a video clip).

In the lead-in section of the HandsOn case, a total of five distinct usability problems were identified (16 separate instances were recorded). They related to user information (one problem), the visual layout (two) and match with the real world (two). Two problems were categorized as serious: one related to inadequate user information and the other to the heuristic of providing an intuitive visual layout. A sliding panel displaying important laboratory data opens on clicking its tab on the side of the screen (Figure 1). This sliding panel was completely missed by seven participants (47%). One of these participants worked through the case twice, and two others worked through it three times, without discovering the panel (see line 2 of Table 3 for details and Multimedia Appendix S2 for a video clip).

In the treatment simulation of the HandsOn case, a total of 18 distinct usability problems were identified (44 separate instances were recorded).
These were related to user information and feedback (five problems), visual layout (three), match with the real world (one), user control and freedom (one), consistency and conformity to standards (two), error prevention and tolerance (five) and error management (one). Twelve of these 18 problems were graded as serious, based on their impact and the frequency of their occurrence.

The first serious usability problem identified in the treatment simulation related to the fidelity of the case and a lack of clarity regarding the correct treatment (Table 3, line 3). Two participants, both experienced specialists, were not convinced of the need to apply any fluid therapy in this case of Addison's disease. The most frequently encountered problem related to the heuristic of designing for error prevention and tolerance. There were repeated unsuccessful attempts by 10 participants (67%) to apply multiple treatments simultaneously (Table 3, line 4 and Multimedia Appendix S3). The simulation was designed to allow treatments to be applied sequentially, not simultaneously, so that feedback could be given after each step. Groups of treatment options are displayed in separate panels.

Table 3: Selected examples of serious usability problems detected, with the interface element and heuristic involved, and proposed measures to address them. The first example is from the WalkThru case and the others from the HandsOn case. The number of participants encountering a particular problem is included in parentheses. Quotes from participants are shown in quotation marks

1. "The sums don't add up; 1.8 L moved out [of the intracellular fluid] but I don't see it in the extracellular fluid!" (1/15)
   Interface element: WalkThru case: case accuracy
   Heuristic involved: Match with the real world: language, conventions, case accuracy
   Solution: Revise the animation to show the extracellular fluid compartment increasing in volume as 1.8 L of water moves into it from the intracellular fluid compartment.

2. Participants do not notice the sliding lab data panel; this panel contains important information on blood and urine chemistry (9/15)
   Interface element: Lead-in slides: lab data tab
   Heuristic involved: Intuitive visual layout
   Solution: Redesign the interface to avoid using the sliding panel: group all related data and display them in plain view in the left panel.

3. "In this patient with Addison's, why can't you start with only the mineralocorticoid then wait?" (2/15)
   Interface element: Treatment simulation: case accuracy
   Heuristic involved: Match with the real world: language, conventions, case accuracy
   Solution: As the simulation is designed to provide practice at prescribing accurate fluid therapy, amend the case data so that the need for fluid treatment is clear.

4. Participants try unsuccessfully to select and apply multiple treatments simultaneously; the application is designed to have one treatment given at a time, with feedback supplied after each step (10/15)
   Interface element: Treatment simulation: treatment selection
   Heuristic involved: Error prevention and tolerance
   Solution: Remove all panel covers from the treatment option groups so that users clearly see that only one option can be selected and applied at a time. Reinforce this in the information provided just before the simulation is …
